Artificial intelligence (AI) has brought unprecedented change across industries, but its transformative power has also underscored the importance of AI regulation. Governments and leading tech organizations around the world are concerned about the unchecked rise of AI, and regulating the technology has become a major part of their discussions and plans.
After the explosion of generative AI systems such as OpenAI’s ChatGPT in late 2022, a wave of other generative AI tools surfaced, making 2023 the beginning of the age of augmented intelligence. Here, humans and algorithms are working together like never before, completing work faster and with better outcomes.
Close on the heels of ChatGPT’s release, Microsoft unveiled yet another marvel that has been helping the MedTech industry: BioGPT. Trained on millions of published biomedical research articles, this generative AI tool could be the first step toward enhancing research and reducing inefficiencies in the industry.
Microsoft Research says BioGPT performs at the level of human experts and outperforms other general and scientific language models, with the potential to massively reduce time-consuming activities and free up lab technicians, clinicians and researchers to pursue new insights in drug development and clinical therapies.
“AI and ML play a big role in the laboratory, especially by improving a lot of business cases. For example, smooth availability of data in real time and the ability to analyze the common mistakes and failures that can be predicted before even starting the DSA [digital subtraction angiography]. The only possibility is by using AI and ML,” says Joseph Laraichi, Director and EU Lead for Life Sciences and Healthcare IoT at HCLTech.
Besides BioGPT, more AI tools are revolutionizing the healthcare and life sciences industry and enhancing the clinician-patient relationship, including:
- ClosedLoop uses AI to make accurate, explainable and actionable predictions of individual-level health risks, optimizes dosing for patients and guides behaviors to enable better outcomes.
- Blueskeye uses AI to analyze face and voice data for behavioral analysis; its machine learning (ML) interprets and personalizes an individual’s mental health and wellbeing care to help clinicians and the patient’s family with assessment and treatment.
- Kinomica uses AI to check if midostaurin—a new drug that can triple survival chances for some acute myeloid leukemia patients—will be effective in killing cancer cells and then identifies the right care and treatment that are likely to be more effective for patients.
Regulation is on everyone’s mind
While there’s no denying that AI is being used in a variety of ways across the life sciences and healthcare industry, among many others, prominent leaders are also talking about the need for regulation.
“AI clearly can bring massive benefits to the economy and society, but we need to make sure this is done in a way that is safe and secure. I think the UK can play a leadership role, because ultimately, we’re only going to grapple with this problem and solve it if we work together not just with the companies, but also with countries around the world,” said UK Prime Minister Rishi Sunak recently.
The EU AI Act
Transparency over the data collected to train AI algorithms has long been a concern for regulators in European countries. Remember Italy’s temporary ban on ChatGPT in April?
EU parliamentarians have now reached common ground on a significant bill, which they will debate with the EU Council and the European Commission to finalize the details.
Meanwhile, taking a proactive approach in collaborating with EU lawmakers, Alphabet CEO Sundar Pichai is forging “An ‘AI Pact’ ahead of the EU AI Act” on a “voluntary basis ahead of the legal deadline of the AI regulation,” European commissioner for internal market, Thierry Breton tweeted on May 24.
Sunak is now in the US to meet President Joe Biden. Among their topics of discussion are AI and its regulation, and Sunak is seeking to carve out a role for the UK as a global example to follow. He will also host a global summit on AI regulation in the autumn.
According to a recent survey in the UK, almost 60% of employees said they look forward to the government setting rules and bringing in restrictions on generative AI in the workplace to safeguard jobs.
Another open letter
The survey took place almost at the same time as another open letter to Congress, calling for urgent regulation to mitigate “the risk of extinction from AI”. It was signed by more than 350 tech industry experts who felt that it “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
“It’s almost akin to a war between chimps and humans. The humans obviously win since we’re far smarter and can leverage more advanced technology to defeat them. If we’re like the chimps, then the AI will destroy us, or we’ll become enslaved to it,” Kevin Baragona, founder of DeepAI, told DailyMail.com.
Besides Baragona, prominent signatories included Demis Hassabis of Google DeepMind, Dario Amodei of Anthropic, ‘godfathers of AI’ Geoffrey Hinton and Yoshua Bengio, executives from Microsoft and Google, and OpenAI CEO and ChatGPT creator Sam Altman, who recently admitted his “worst fears” that “significant harm” could be caused to the world using ChatGPT.
“If this technology goes wrong, it could go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening,” he told lawmakers. In a recent blogpost, entitled ‘Governance of Superintelligence’, he also mentioned the need for “something like an IAEA [International Atomic Energy Agency] for superintelligence efforts”.
This agency should be responsible for reducing the “existential risk” and not “issues that should be left to individual countries, such as defining what AI should be allowed to say”, the blogpost added.
Altman’s six-nation trip
With Israel, Jordan, Qatar, the UAE and South Korea on his itinerary, Altman started off with India as “a potential investment ground”, where last week he met his Stanford University colleague Rohan Verma, CEO of MapmyIndia.
Later, at an event by The Economic Times, Altman said: “India is a country that truly embraced ChatGPT. There has been a lot of early adoption and real enthusiasm from the users. To improve government services like healthcare, countries such as India should back research on AI. Some nationally funded AI effort feels like a good idea. The main thing that I think is important is figuring out how to integrate these technologies into other services. And that is an area that I think governments are behind on, and don’t have the answers yet.”
After a meeting with students at the Indian Institute of Technology, Delhi (IITD), where he discussed the opportunities for India and what the country should do in AI, Altman also said: “We also felt the need to think about global regulation, which prevents some of the downsides from happening.”
AI regulation is something that India, like other countries, is taking seriously. Last month, Indian Union Minister of State for IT and Electronics Rajeev Chandrasekhar said the upcoming Digital India Act will be responsible for setting guardrails for AI and emerging tech through the “prism of user harm”.
“Sam Altman is obviously a smart man. He has his own ideas about how AI should be regulated. We certainly think we have some smart brains in India as well and we have our own views on how AI should have guardrails. If there is eventually a United Nations of AI – as Sam Altman wants – more power to it. But that does not stop us from doing what is right for our digital nagriks (citizens) and keeping the internet safe and trusted,” he told MoneyControl.
“Consultation has already started… In the Digital India Act, a whole chapter is going to be devoted to emerging technologies which is not about AI only, but multiple other technologies as well... On how we would regulate them through the prism of user harm,” he added.